337 research outputs found

    Deep Expander Networks: Efficient Deep Networks from Graph Theory

    Efficient CNN designs like ResNet and DenseNet were proposed to improve the accuracy-efficiency trade-off. They essentially increase connectivity, allowing efficient information flow across layers. Inspired by these techniques, we propose to model the connections between filters of a CNN using graphs that are simultaneously sparse and well connected. Sparsity yields efficiency, while good connectedness preserves the expressive power of the CNN. We use a well-studied class of graphs from theoretical computer science that satisfies both properties, known as expander graphs. Expander graphs are used to model connections between filters in CNNs to design networks called X-Nets. We present two guarantees on the connectivity of X-Nets: each node influences every node in a layer within a logarithmic number of steps, and the number of paths between two sets of nodes is proportional to the product of their sizes. We also propose efficient training and inference algorithms, making it possible to train deeper and wider X-Nets effectively. Expander-based models give a 4% improvement in accuracy on MobileNet over grouped convolutions, a popular technique with the same sparsity but worse connectivity. X-Nets give better performance trade-offs than the original ResNet and DenseNet-BC architectures. We achieve model sizes comparable to state-of-the-art pruning techniques using our simple architecture design, without any pruning. We hope that this work motivates other approaches that utilize results from graph theory to develop efficient network architectures. Comment: ECCV'18
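
    The sketch below illustrates the kind of sparse, well-connected wiring described above: it builds a random bipartite mask in which every output channel is connected to a fixed small number of input channels (random bipartite graphs of this kind are expanders with high probability) and applies it to a 1x1-convolution weight matrix. This is a minimal illustration under assumed layer sizes and degree, not the authors' X-Net implementation.

        import numpy as np

        def expander_mask(out_channels, in_channels, degree, seed=None):
            """Random sparse bipartite connectivity mask: each output channel is
            wired to `degree` distinct input channels chosen uniformly at random.
            Such random bipartite graphs are expanders with high probability,
            giving layers that are sparse yet well connected."""
            rng = np.random.default_rng(seed)
            mask = np.zeros((out_channels, in_channels), dtype=np.float32)
            for o in range(out_channels):
                chosen = rng.choice(in_channels, size=degree, replace=False)
                mask[o, chosen] = 1.0
            return mask

        # Toy usage: sparsify a 1x1 convolution (out_channels x in_channels) to degree 4.
        weights = np.random.randn(64, 256).astype(np.float32)
        mask = expander_mask(64, 256, degree=4, seed=0)
        sparse_weights = weights * mask        # only 4 of 256 inputs feed each output
        print(f"connection density: {mask.mean():.3f}")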

    Balanced Allocations and Double Hashing

    Double hashing has recently found more common usage in schemes that use multiple hash functions. In double hashing, for an item $x$, one generates two hash values $f(x)$ and $g(x)$, and then uses the combinations $(f(x) + k g(x)) \bmod n$ for $k = 0, 1, 2, \ldots$ to generate multiple hash values from the initial two. We first perform an empirical study showing that, surprisingly, the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of $d$ choices, as well as in several related variants. We then provide theoretical results that explain the behavior of double hashing in this context. Comment: Further updated, small improvements/typos fixed
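
    As a concrete illustration of the allocation scheme described above, the sketch below derives $d$ bin choices per item via double hashing, $(f(x) + k g(x)) \bmod n$ for $k = 0, \ldots, d-1$, and places each item in the least loaded choice. The blake2b-based stand-ins for $f$ and $g$ and all parameters are assumptions made for a runnable toy; they are not the paper's experimental setup.

        import hashlib

        def toy_hash(x, salt):
            """Deterministic stand-in for the hash functions f and g."""
            digest = hashlib.blake2b(f"{salt}:{x}".encode(), digest_size=8).digest()
            return int.from_bytes(digest, "big")

        def place_items(items, n_bins, d):
            """Balanced allocation with double hashing: each item goes to the
            least loaded of the d bins (f(x) + k*g(x)) mod n, k = 0, ..., d-1."""
            loads = [0] * n_bins
            for x in items:
                f = toy_hash(x, "f") % n_bins
                g = toy_hash(x, "g") % n_bins or 1   # avoid g(x) = 0, which would collapse all d choices
                choices = [(f + k * g) % n_bins for k in range(d)]
                best = min(choices, key=lambda b: loads[b])
                loads[best] += 1
            return loads

        loads = place_items(range(100_000), n_bins=10_000, d=2)
        print("maximum load:", max(loads))   # with d >= 2 choices the max load stays O(log log n)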

    Simulating Auxiliary Inputs, Revisited

    For any pair $(X,Z)$ of correlated random variables, we can think of $Z$ as a randomized function of $X$. Provided that $Z$ is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) we discuss and compare the efficiency of known results, finding a flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs"; (b) we present a novel boosting algorithm for constructing the simulator, which essentially fixes the flaw; this boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms; (c) our bounds are much better than the bounds known so far: to make the simulator $(s,\epsilon)$-indistinguishable we need complexity $O\left(s \cdot 2^{5\ell} \epsilon^{-2}\right)$ in time/circuit size, which is better by a factor $\epsilon^{-2}$ than previous bounds. In particular, with our technique we (finally) obtain meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like $\mathsf{AES256}$. Comment: Some typos present in the previous version have been corrected
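
    For context, the indistinguishability requirement on the simulator used throughout this line of work can be stated as follows; the notation is a standard reconstruction, not a quotation from the paper. A simulator $h$ is $(s,\epsilon)$-indistinguishable if, for every distinguisher $D$ of circuit size at most $s$,

        \[
          \bigl| \Pr[D(X, Z) = 1] - \Pr[D(X, h(X)) = 1] \bigr| \le \epsilon .
        \]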

    Simple extractors via constructions of cryptographic pseudo-random generators

    Trevisan has shown that constructions of pseudo-random generators from hard functions (the Nisan-Wigderson approach) also produce extractors. We show that constructions of pseudo-random generators from one-way permutations (the Blum-Micali-Yao approach) can be used for building extractors as well. Using this new technique we build extractors that do not use designs or polynomial-based error-correcting codes and that are very simple and efficient. For example, one extractor produces each output bit separately in $O(\log^2 n)$ time. These extractors work for weak sources with min-entropy $\lambda n$, for an arbitrary constant $\lambda > 0$, have seed length $O(\log^2 n)$, and their output length is $\approx n^{\lambda/3}$. Comment: 21 pages; an extended abstract will appear in Proc. ICALP 2005; small corrections, some comments and references added
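
    The Blum-Micali-Yao approach referenced above iterates a one-way permutation and outputs a hard-core bit at each step. The sketch below is a toy Blum-Micali-style generator over a small prime field, using modular exponentiation and a simple threshold predicate; it only illustrates the paradigm, is not the paper's extractor construction, and its base and modulus are toy assumptions with no real security.

        # Toy Blum-Micali-style generator: x_{i+1} = g^{x_i} mod p, emitting one
        # bit per step from a threshold predicate on the state.
        P = 2_147_483_647      # the Mersenne prime 2^31 - 1 (toy-sized, not secure)
        G = 7                  # assumed base, for illustration only

        def blum_micali_bits(seed, n_bits):
            x = seed % P
            bits = []
            for _ in range(n_bits):
                x = pow(G, x, P)                            # apply the (candidate) one-way permutation
                bits.append(1 if x > (P - 1) // 2 else 0)   # threshold predicate as the output bit
            return bits

        print("".join(map(str, blum_micali_bits(seed=123456789, n_bits=32))))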

    A New Approximate Min-Max Theorem with Applications in Cryptography

    We propose a novel proof technique that can be applied to attack a broad class of problems in computational complexity in which switching the order of universal and existential quantifiers is helpful. Our approach combines the standard min-max theorem with convex approximation techniques, offering quantitative improvements over the standard way of using min-max theorems as well as more concise and elegant proofs.
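
    The "standard min-max theorem" invoked here is von Neumann's; one common formulation (stated for context, not quoted from the paper) is

        \[
          \min_{x \in X} \, \max_{y \in Y} f(x, y) \;=\; \max_{y \in Y} \, \min_{x \in X} f(x, y)
        \]

    for compact convex sets $X$, $Y$ and a continuous $f$ that is convex in $x$ and concave in $y$.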

    Modulus Computational Entropy

    The so-called \emph{leakage chain rule} is a very important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable $X$ in the case where the adversary, having already learned some random variables $Z_1, \ldots, Z_\ell$ correlated with $X$, obtains some further information $Z_{\ell+1}$ about $X$. Analogously to the information-theoretic case, one might expect that also for the \emph{computational} variants of entropy the loss depends only on the actual leakage, i.e. on $Z_{\ell+1}$. Surprisingly, Krenn et al.\ have shown recently that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in $|(Z_1, \ldots, Z_\ell)|$. This means that the current standard definitions of computational entropy do not allow one to fully capture leakage that occurred "in the past", which severely limits the applicability of this notion. As a remedy for this problem we propose a slightly stronger definition of computational entropy, which we call the \emph{modulus computational entropy}, and use it as a technical tool that allows us to prove a desired chain rule that depends only on the actual leakage and not on its history. Moreover, we show that the modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that the modulus entropy is, up to now, the weakest restriction that guarantees that the chain rule for computational entropy works. As an example of application we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds. Comment: Accepted at ICTS 201
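
    For contrast with the computational setting discussed above, the information-theoretic leakage chain rule (for average conditional min-entropy, in the form commonly attributed to Dodis et al.) loses only the bit-length of the new leakage:

        \[
          \widetilde{H}_\infty\!\left(X \mid Z_1, \ldots, Z_\ell, Z_{\ell+1}\right)
            \;\ge\;
          \widetilde{H}_\infty\!\left(X \mid Z_1, \ldots, Z_\ell\right) - |Z_{\ell+1}|,
        \]

    where $|Z_{\ell+1}|$ denotes the length of $Z_{\ell+1}$ in bits.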

    Asymptotic entanglement in a two-dimensional quantum walk

    The evolution operator of a discrete-time quantum walk involves a conditional shift in position space which entangles the coin and position degrees of freedom of the walker. After several steps, the coin-position entanglement (CPE) converges to a well-defined value which depends on the initial state. In this work we provide an analytical method which allows for the exact calculation of the asymptotic reduced density operator and the corresponding CPE for a discrete-time quantum walk on a two-dimensional lattice. We use the von Neumann entropy of the reduced density operator as an entanglement measure. The method is applied to the case of a Hadamard walk, for which the dependence of the resulting CPE on initial conditions is obtained. Initial states leading to maximum or minimum CPE are identified, and the relation between the coin or position entanglement present in the initial state of the walker and the final level of CPE is discussed. The CPE obtained from separable initial states satisfies an additivity property in terms of the CPE of the corresponding one-dimensional cases. Non-local initial conditions are also considered, and we find that the extreme case of an initially uniform position distribution leads to the largest CPE variation. Comment: Major revision; improved structure. Theoretical results are now separated from specific examples. Most figures have been replaced by new versions. The paper is now significantly reduced in size: 11 pages, 7 figures
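
    As a self-contained illustration of the entanglement measure used above, the sketch below traces out the position degree of freedom of a small coin-position pure state and evaluates the von Neumann entropy of the resulting reduced density operator. The state and dimensions are made-up toy values; the asymptotic calculation in the paper itself is analytical.

        import numpy as np

        def von_neumann_entropy(rho, eps=1e-12):
            """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
            evals = np.linalg.eigvalsh(rho)
            evals = evals[evals > eps]
            return float(-np.sum(evals * np.log2(evals)))

        def coin_reduced_state(psi, coin_dim, pos_dim):
            """Partial trace over position of a pure coin-position state psi."""
            m = psi.reshape(coin_dim, pos_dim)   # m[c, x] = amplitude for coin c at position x
            return m @ m.conj().T                # rho_coin = Tr_pos |psi><psi|

        # Toy example: 2-dimensional coin, 4 positions, an entangled two-term state.
        psi = np.zeros(2 * 4, dtype=complex)
        psi[0] = 1 / np.sqrt(2)    # coin 0 at position 0
        psi[7] = 1 / np.sqrt(2)    # coin 1 at position 3
        rho_coin = coin_reduced_state(psi, coin_dim=2, pos_dim=4)
        print("CPE (bits):", von_neumann_entropy(rho_coin))   # 1.0 for this state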

    Formalizing Data Deletion in the Context of the Right to be Forgotten

    The right of an individual to request the deletion of their personal data by an entity that might be storing it -- referred to as the right to be forgotten -- has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled -- of what it means for such personal data to be deleted. In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures several, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and we also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.

    A multicenter case registry study on medication-related osteonecrosis of the jaw in patients with advanced cancer

    PURPOSE: This observational case registry study was designed to describe the natural history of cancer patients with medication-related osteonecrosis of the jaw (ONJ) and to evaluate the rate of ONJ resolution. METHODS: Adults with a diagnosis of cancer and a new diagnosis of ONJ were enrolled and evaluated by a dental specialist at baseline and every 3 months for 2 years, and then every 6 months for 3 years, until death, consent withdrawal, or loss to follow-up. The primary endpoint was the rate and time course of ONJ resolution. Secondary endpoints included the frequency of incident ONJ risk factors, ONJ treatment patterns, and treatment patterns of antiresorptive agents for subsequent ONJ. RESULTS: Overall, 327 patients were enrolled; 207 (63%) were continuing on study at data cutoff. Up to 69% of evaluable patients with ONJ had resolution or improvement during the study. ONJ resolution (per AAOMS ONJ staging criteria) was observed in 114 patients (35%); the median (interquartile range) time from ONJ onset to resolution was 7.3 (4.5-11.4) months. Most patients (97%) had received antiresorptive medication before ONJ development; 9 patients (3%) had not. Overall, 68% had received zoledronic acid, 38% denosumab, and 10% pamidronate (56% had received bisphosphonates only, 18% denosumab only, and 21% both). CONCLUSIONS: These results are consistent with those observed in clinical trials evaluating skeletal-related events in patients with advanced malignancy involving bone. Longer follow-up will provide further information on ONJ recurrence and resolution rates in medically versus surgically managed patients.

    The effect of large-decoherence on mixing-time in Continuous-time quantum walks on long-range interacting cycles

    In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are extensions of cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by a corresponding point contact that induces the decoherence process. We then focus on large rates of decoherence, calculate the probability distribution analytically, and obtain lower and upper bounds on the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and to the inverse square of the distance parameter $m$. This shows that the mixing time decreases as the range of interaction increases. Moreover, what we obtain for $m = 0$ agrees with the results of Fedichkin, Solenov and Tamon \cite{FST} for the cycle, and we see that the mixing time of CTQWs on the cycle improves as interacting edges are added. Comment: 16 pages, 2 figures
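
    To make the setting concrete, the sketch below evolves a decoherence-free continuous-time quantum walk on a plain cycle (which the abstract identifies with the $m = 0$ case) and returns the position distribution $p_j(t) = |\langle j| e^{-iAt} |0\rangle|^2$. It is a minimal illustration under these simplifying assumptions and does not include Gurvitz's measurement-induced decoherence.

        import numpy as np

        def ctqw_cycle_distribution(n_nodes, t, start=0):
            """Position distribution of a decoherence-free CTQW on the n-node cycle,
            p_j(t) = |<j| exp(-i A t) |start>|^2, computed via the eigendecomposition
            of the (symmetric) adjacency matrix A."""
            A = np.zeros((n_nodes, n_nodes))
            for j in range(n_nodes):
                A[j, (j + 1) % n_nodes] = A[(j + 1) % n_nodes, j] = 1.0
            evals, evecs = np.linalg.eigh(A)
            U = evecs @ np.diag(np.exp(-1j * t * evals)) @ evecs.conj().T
            return np.abs(U[:, start]) ** 2

        p = ctqw_cycle_distribution(n_nodes=8, t=2.0)
        print(np.round(p, 4), "sum =", round(float(p.sum()), 6))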